Results 1 - 16 of 16
1.
2nd International Conference on Electronics and Renewable Systems, ICEARS 2023 ; : 1532-1537, 2023.
Article in English | Scopus | ID: covidwho-2298262

ABSTRACT

Face mask detection is the process of identifying in real time whether a person is wearing a face mask, through the use of computer vision and machine learning algorithms. This technology can be used in applications such as security systems at public transportation hubs or in hospitals to ensure compliance with health and safety regulations during a pandemic or other infectious disease outbreak. The system analyzes images or video streams from cameras and applies computer vision techniques to detect the presence of a face mask on a person's face. Its output is either a binary result (i.e., mask detected or not detected) or a more detailed result that provides information about the type of mask and its location on the face. © 2023 IEEE.

2.
4th International Conference on Recent Trends in Advanced Computing - Computer Vision and Machine Intelligence Paradigms for Sustainable Development Goals, ICRTAC-CVMIP 2021 ; 967:1-14, 2023.
Article in English | Scopus | ID: covidwho-2266942

ABSTRACT

During the pandemic, online classes predominated. However, the new normal requires effective analysis of students' classroom engagement, and engagement in offline classes is also at risk, especially in the post-COVID period. Facial expression analysis has therefore become essential in the learning environment, whether online or offline. The offline classroom is treated as the problem environment, since its solutions can be readily adapted to the online setting. Notably, in a PTZ camera environment, recognition becomes more challenging due to varying face poses, a limited field of view (FOV), illumination conditions, and the effects of continuous panning, zooming in, and zooming out. In this paper, facial expression-based student engagement analysis in a classroom environment is proposed. Face detection is performed with the YOLO (You Only Look Once) detector to find multiple faces in the classroom with high speed and accuracy. Then, using the Ensemble of Robust Constrained Local Models (ERCLM) method, landmark points are localized in the detected faces even under occlusion, and feature matching is performed. The matched landmark points are aligned by an affine transformation. Finally, the aligned faces, showing different expressions, are fed as input to Faster R-CNN (Faster Region-based Convolutional Neural Network), which recognizes behavioral activities such as Attentiveness (Zero-In (ZI)), Non-Attentiveness (NA), Day Dreaming (DD), Napping (N), Playing with Personal Stuff in Private (PPSP), and Talking to the Students Behind (TSB). The proposed approach is demonstrated on the TCE classroom datasets and online datasets, and the framework outperforms state-of-the-art algorithms. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
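A minimal sketch of the affine face-alignment step mentioned in this abstract: matched landmark points from a detected face are warped onto fixed template positions before the crop is passed on for expression recognition. The landmark and template coordinates below are illustrative placeholders, not values from the paper.

```python
import cv2
import numpy as np

def align_face(face_crop, landmarks, template, out_size=(224, 224)):
    """Warp a face crop so that three matched landmarks (e.g. both eye centres
    and the nose tip) land on fixed template positions."""
    src = np.float32(landmarks)   # 3 x 2 points detected in the face crop
    dst = np.float32(template)    # 3 x 2 target positions in the aligned output
    M = cv2.getAffineTransform(src, dst)
    return cv2.warpAffine(face_crop, M, out_size)

# Example with made-up coordinates on a blank 160 x 160 crop.
face = np.zeros((160, 160, 3), dtype=np.uint8)
landmarks = [(48, 60), (112, 60), (80, 100)]   # left eye, right eye, nose tip
template = [(70, 80), (154, 80), (112, 140)]
aligned = align_face(face, landmarks, template)  # 224 x 224 aligned crop
```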

3.
6th Computational Methods in Systems and Software, CoMeSySo 2022 ; 597 LNNS:37-53, 2023.
Article in English | Scopus | ID: covidwho-2248986

ABSTRACT

The COVID-19 outbreak has been causing immense damage to global health and has put the world under tremendous pressure since early 2020. In March 2020 the World Health Organization (WHO) declared the novel coronavirus outbreak a global pandemic. Testing of infected patients and early recognition of positive cases are considered critical steps in the fight against COVID-19 to avoid further spreading of the epidemic. As no fast and accurate tools are yet available for the detection of COVID-19-positive cases, the need for supporting diagnostic tools has increased. Any technological method that can provide rapid and accurate detection will be very useful to medical professionals. Several detection methods are based on chest X-ray images, which contain relevant information about the COVID-19 infection. The goal of this paper is to introduce a Detectron2-based Faster R-CNN to diagnose COVID-19 automatically from X-ray images. In addition, this study could support non-radiologists with better localization of the disease through visual bounding boxes. © 2023, The Author(s), under exclusive license to Springer Nature Switzerland AG.
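For readers unfamiliar with the tooling named above, the following is a minimal configuration sketch of training a Faster R-CNN in Detectron2 on a chest X-ray detection dataset. The dataset name "covid_xray_train", the single lesion class, and the solver settings are illustrative assumptions, not the authors' recipe.

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("covid_xray_train",)   # hypothetical registered dataset
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1          # a single "COVID-19 finding" box class
cfg.SOLVER.IMS_PER_BATCH = 2
cfg.SOLVER.BASE_LR = 0.00025
cfg.SOLVER.MAX_ITER = 3000

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```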

4.
Med Phys ; 2022 Aug 11.
Article in English | MEDLINE | ID: covidwho-2287223

ABSTRACT

BACKGROUND: Auxiliary diagnosis and monitoring of lung diseases based on lung ultrasound (LUS) images is an important area of clinical research. The A-line is one of the most common LUS indicators and can support the assessment of lung diseases. Traditional A-line detection relies mainly on experienced clinicians, which is inefficient and cannot meet the needs of regions with limited medical resources. Therefore, automatic detection of A-lines in LUS images is important. PURPOSE: To overcome the disadvantages of traditional A-line detection methods, achieve automatic and accurate detection, and provide theoretical support for clinical application, we propose a novel A-line detection method for LUS images acquired with different probe types. METHODS: First, an improved Faster R-CNN model with a localization-box selection strategy was designed to accurately locate the pleural line. Then, the LUS image below the pleural line was segmented for independent analysis, excluding the influence of other similar structures. Next, image-processing methods based on total variation, matched filtering, and gray difference were applied to achieve automatic A-line detection. Finally, a "depth" index was designed to verify accuracy by judging whether the automatic measurements fall within ±5% of the corresponding manual results. In the experiments, 3000 convex-array LUS images were used to train and validate the improved pleural line localization model with five-fold cross validation. A further 850 convex-array and 1080 linear-array LUS images were used to test the trained pleural line localization model and the proposed image-processing-based A-line detection method. Accuracy analysis, error statistics, and the Hausdorff distance were employed to evaluate the experimental results. RESULTS: After 100 epochs, the mean loss of the training and validation sets of the improved Faster R-CNN model reached 0.6540 and 0.7882, with a validation accuracy of 98.70%. Applied to the convex and linear probe test sets, the trained pleural line localization model reached accuracies of 97.88% and 97.11%, respectively, 3.83% and 8.70% higher than the original Faster R-CNN model. The accuracy, sensitivity, and specificity of A-line detection reached 95.41%, 0.9244, and 0.9875 for convex probes and 94.63%, 0.9230, and 0.9766 for linear probes, respectively. Compared with experienced clinicians' results, the mean depth error was 1.5342 ± 1.2097 with a p value of 0.9021, and the Hausdorff distance was 5.7305 ± 1.8311. In addition, the accumulated accuracy of the two-stage experiment (pleural line localization and A-line detection) was calculated as the final accuracy of the whole A-line detection system: 93.39% and 91.90% for convex and linear probes, respectively, higher than previous methods. CONCLUSIONS: The proposed method, combining image processing and deep learning, can automatically and accurately detect A-lines in LUS images with different probe types, which has important application value for clinical diagnosis.
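A short sketch of the ±5% "depth" agreement check described in the abstract: an automatic A-line depth counts as correct when it falls within ±5% of the clinician's manual measurement. The tolerance restates the abstract; the example depth values are made up.

```python
def depth_within_tolerance(auto_depth_mm, manual_depth_mm, tol=0.05):
    """True when the automatic depth lies within +/-tol of the manual depth."""
    return abs(auto_depth_mm - manual_depth_mm) <= tol * manual_depth_mm

def detection_accuracy(auto_depths, manual_depths, tol=0.05):
    hits = sum(depth_within_tolerance(a, m, tol)
               for a, m in zip(auto_depths, manual_depths))
    return hits / len(manual_depths)

# Example with three illustrative A-line depth pairs (automatic vs. manual, mm).
print(detection_accuracy([20.3, 18.9, 25.7], [20.0, 19.0, 24.0]))  # ~0.667
```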

5.
2022 IEEE-EMBS International Conference on Biomedical and Health Informatics, BHI 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2161375

ABSTRACT

The emergence of SARS-CoV-2, the 2019 novel coronavirus, in December 2019 has prioritized research on pulmonary disease diagnosis and prognosis, especially using artificial intelligence (AI) and deep learning (DL). Polymerase Chain Reaction (PCR) is the most widely used technique to detect SARS-CoV-2, with a 0.12% false negative rate. While 75% of hospitalized cases develop pneumonia caused by the virus, patients can still develop bacterial pneumonia. COVID-19 pneumonia can be diagnosed based on clinical data and Computed Tomography (CT) scans. However, chest X-rays are faster, cheaper, emit less radiation, and can be performed at the bedside. In this article, we extend the application of a VGG-16 based Faster Region-based Convolutional Neural Network (Faster R-CNN) to the detection of pneumonia and COVID-19 in chest X-ray images, using several public datasets with total image counts ranging from 2122 to 18455 chest X-rays, and study the impact of several hyper-parameters, such as the objectness threshold and the number and length of epochs, to optimize the model's performance. Our results are in line with the state of the art for Faster R-CNN in pneumonia detection, with a best accuracy of 65%. For COVID-19 detection, Faster R-CNN achieves a 90% validation accuracy. © 2022 IEEE.
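As a rough illustration of the kind of model described above, here is a hedged sketch of a VGG-16-backed Faster R-CNN assembled with torchvision's generic FasterRCNN class. The anchor sizes, the three-class head (background / pneumonia / COVID-19), and the detection score threshold are placeholders, not the paper's tuned hyper-parameters.

```python
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator

# VGG-16 convolutional trunk as the backbone; its last conv block outputs 512 channels.
backbone = torchvision.models.vgg16(weights="DEFAULT").features
backbone.out_channels = 512

anchor_generator = AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                   aspect_ratios=((0.5, 1.0, 2.0),))
roi_pooler = torchvision.ops.MultiScaleRoIAlign(featmap_names=["0"],
                                                output_size=7, sampling_ratio=2)

model = FasterRCNN(backbone,
                   num_classes=3,                    # background / pneumonia / COVID-19
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler,
                   box_score_thresh=0.5)             # score cut-off for reported detections
model.eval()
```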

6.
5th International Conference on Advanced Electronic Materials, Computers and Software Engineering, AEMCSE 2022 ; : 629-633, 2022.
Article in English | Scopus | ID: covidwho-2161367

ABSTRACT

With the novel coronavirus (COVID-19) raging globally, many jurisdictions require masks to be worn in public places to effectively prevent the virus from spreading through crowds. In response to this problem, this paper proposes a mask-wearing detection method based on the Faster R-CNN algorithm. The method uses ResNet-50 to extract convolutional features and selects high-quality proposal boxes through non-maximum suppression (NMS), which improves the detection of incorrectly worn masks and can serve as a reminder in practical applications, further supporting epidemic prevention. The final experiments show that mask wearing can be detected accurately and efficiently through the steps of feature extraction and prediction-box generation. © 2022 IEEE.
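A plain NumPy sketch of the non-maximum suppression (NMS) step this abstract relies on: proposal boxes are visited in descending score order, and any box overlapping an already-kept box by more than `iou_thresh` is discarded. The boxes, scores, and threshold are illustrative.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; returns indices of kept boxes."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thresh]
    return keep

boxes = np.array([[10, 10, 60, 60], [12, 12, 62, 62], [100, 100, 150, 150]], float)
scores = np.array([0.9, 0.8, 0.75])
print(nms(boxes, scores))  # [0, 2] -- the near-duplicate second box is suppressed
```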

7.
ACM Transactions on Spatial Algorithms and Systems ; 8(3), 2022.
Article in English | Web of Science | ID: covidwho-2153117

ABSTRACT

The rapid spread of coronavirus (COVID-19) causes severe respiratory infections affecting the lungs. Automatic diagnosis helps fight COVID-19 in community outbreaks, and medical imaging technology can reinforce disease monitoring and detection with the advancement of computer vision. Unfortunately, deep learning models are starved of sufficiently general datasets, as COVID-19 data repositories are not rich enough to provide significantly distinct features. To address this limitation, this article describes the generation of synthetic images of COVID-19 and other chest infections with distinct features through an empirical top entropy-based patch selection approach using a generative adversarial network. Diagnosis is then performed with a faster region-based convolutional neural network using 6,406 synthetic and 3,933 original chest X-ray images of different chest infections, which also addresses the data imbalance problem and avoids bias toward a particular class. The experiment confirms a satisfactory COVID-19 diagnosis accuracy of 99.16% in a multi-class scenario.
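A hedged sketch of what a top entropy-based patch selection step could look like: each X-ray is tiled into patches, the Shannon entropy of each patch's grey-level histogram is computed, and the highest-entropy patches are kept. The patch size and number of patches are illustrative choices, not the authors' settings.

```python
import numpy as np

def patch_entropy(patch, bins=256):
    """Shannon entropy (bits) of a patch's grey-level histogram."""
    hist, _ = np.histogram(patch, bins=bins, range=(0, 256), density=True)
    p = hist[hist > 0]
    return float(-np.sum(p * np.log2(p)))

def top_entropy_patches(image, patch=64, k=8):
    """Return the (row, col) origins of the k highest-entropy patches."""
    h, w = image.shape
    scored = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tile = image[y:y + patch, x:x + patch]
            scored.append((patch_entropy(tile), (y, x)))
    scored.sort(reverse=True)
    return [pos for _, pos in scored[:k]]

xray = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # stand-in image
print(top_entropy_patches(xray, patch=128, k=4))
```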

8.
2022 IEEE International Conference on Data Science and Information System, ICDSIS 2022 ; 2022.
Article in English | Scopus | ID: covidwho-2136233

ABSTRACT

Maintaining social distance in public places has been observed to be one of the most significant factors in curbing the spread of the coronavirus. This makes it essential for the authorities of these public places, governmental or non-governmental, to monitor the proper execution of this protocol. The risks of virus spread can be minimized by avoiding physical contact among people. Monitoring social distance is a trending problem still early in its development. Although solutions exist using the YOLO model or the TensorFlow Object Detection API, the purpose of this project is to provide a deep learning model for social distance tracking. In deep learning and artificial intelligence, Transformers are traditionally applied to natural language processing, though their application to object detection is novel and intuitive; they use self-attention to efficiently overcome the limitations imposed by inductive convolutional biases. Each individual detected by the model is assigned a bounding box matching that person's dimensions and physical location on the image plane. The centroid of each bounding box acts as a location coordinate, and the relative Euclidean distances between these points are used to differentiate between followers and violators of the protocol. A violation threshold is established to evaluate whether a distance value infringes the minimum social distance. This project applies the concepts of computer vision and deep learning to handle the task. © 2022 IEEE.
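A short sketch of the distance rule this abstract describes: bounding-box centroids stand in for people, and any pair of centroids closer than a violation threshold on the image plane is flagged. The boxes and the pixel threshold are illustrative values.

```python
import itertools
import math

def centroid(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def find_violators(boxes, min_dist=100.0):
    """Return indices of detections closer than min_dist (pixels) to someone else."""
    centers = [centroid(b) for b in boxes]
    violators = set()
    for i, j in itertools.combinations(range(len(centers)), 2):
        if math.dist(centers[i], centers[j]) < min_dist:
            violators.update((i, j))
    return violators

people = [(10, 40, 60, 200), (80, 50, 130, 210), (400, 60, 450, 220)]
print(find_violators(people))  # {0, 1}: the first two detections are too close
```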

9.
2021 8th International Conference on Electrical Engineering, Computer Science and Informatics (EECSI) 2021 ; : 359-364, 2021.
Article in English | Web of Science | ID: covidwho-2040793

ABSTRACT

Monitoring the number of people is essential to estimate the level of crowding in a public area, especially during the COVID-19 pandemic. CCTV recordings need to be processed to count the number of people in a crowd at a specific time. However, counting people in CCTV footage is not easy; it can be approached by detecting a specific object class in the frames that make up the recording. This study proposes the Faster Region-based Convolutional Neural Network (Faster R-CNN) method with ResNet50 to count the number of people in a crowd from low-resolution CCTV images. The research shows that crowd counting with Faster R-CNN requires careful selection of an appropriate architecture: the ResNet50 architecture provided an accuracy of 97.20% in detecting the number of people in crowd images, the highest accuracy among detectors from previous studies on the same dataset. The Region Proposal Network makes Faster R-CNN robust to the varying number of people in crowd images, while dataset quality and anchor aspect-ratio values further improve accuracy. In addition, appropriate learning parameters make the method perform more optimally. This configuration can be applied to real-time testing, where it gave a best result of 86% using Faster R-CNN with ResNet50.
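A hedged sketch of people counting with an off-the-shelf Faster R-CNN (ResNet-50 FPN) from torchvision, in the spirit of the study above. The confidence threshold and the COCO "person" label id (1) follow standard torchvision conventions and are not the paper's exact configuration.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def count_people(image_path, score_thresh=0.5):
    """Count detections of the COCO 'person' class in one CCTV frame."""
    img = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]
    keep = (out["labels"] == 1) & (out["scores"] >= score_thresh)  # label 1 = person
    return int(keep.sum())

# print(count_people("cctv_frame.jpg"))  # hypothetical path to a CCTV frame
```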

10.
International Conference on Intelligent Emerging Methods of Artificial Intelligence and Cloud Computing, IEMAICLOUD 2021 ; 273:540-549, 2022.
Article in English | Scopus | ID: covidwho-1872295

ABSTRACT

The coronavirus disease 2019 has caused a worldwide catastrophe, spreading destructively and causing the deaths of more than 2.47 million people around the globe. In the current circumstances, most countries are trying to implement social distancing, mask wearing, extensive testing, and contact tracing strategies to curb outbreaks of the virus. Maintaining adequate social or physical distance is believed to be a sufficient precautionary measure against the spread of the infection. This research paper makes two contributions: social distance measurement and face mask detection using various deep learning approaches. In the first part, we monitor social distance by detecting people in a video feed with the SSD-MobileNet and Faster R-CNN ResNet50 deep learning algorithms; the image is then converted into an overhead view to measure the distance between people and ensure safe physical distancing. In the second part, we detect face masks by implementing the MobileNetV2 convolutional neural network architecture: computer vision is used to find the region of interest of a face, and the model then determines whether a mask is present on the face. Both the social distance measurement and face mask detection systems offer high accuracy. For social distance monitoring, the accuracy depends greatly on the people detection, and the execution time is 30 ms and 89 ms for SSD-MobileNet and Faster R-CNN ResNet50, respectively. For face mask detection, we obtained 99% accuracy, verified in real time on a live camera to show that the model is not overfitting and performs well outside our dataset. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
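A minimal sketch of the overhead-view conversion mentioned above: four manually chosen ground-plane points in the camera frame are mapped to a rectangle so that pixel distances become roughly proportional to real distances. The corner coordinates are illustrative and would have to be calibrated per camera.

```python
import cv2
import numpy as np

src = np.float32([[300, 720], [980, 720], [850, 400], [420, 400]])  # ground corners in the frame
dst = np.float32([[0, 600], [400, 600], [400, 0], [0, 0]])          # top-down rectangle
M = cv2.getPerspectiveTransform(src, dst)

def to_overhead(points):
    """Project (x, y) foot points of detected people into the top-down view."""
    pts = np.float32(points).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, M).reshape(-1, 2)

print(to_overhead([(640, 700), (500, 450)]))  # distances can now be measured top-down
```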

11.
Sensors ; 22(10):3824, 2022.
Article in English | ProQuest Central | ID: covidwho-1871112

ABSTRACT

The purpose of this paper is to study the recognition of ships and their structures to improve the safety of drone operations engaged in shore-to-ship drone delivery service. This study has developed a system that can distinguish between ships and their structures by using a convolutional neural network (CNN). First, the dataset of the Marine Traffic Management Net is described and the CNN's object detection based on the Detectron2 platform is discussed. The experiment and its performance are then described. In addition, this study has been conducted based on actual drone delivery operations, the first air delivery service by drones in Korea.

12.
16th International Conference on Bio-Inspired Computing: Theories and Applications, BIC-TA 2021 ; 1566 CCIS:346-357, 2022.
Article in English | Scopus | ID: covidwho-1797666

ABSTRACT

Due to COVID-19, intelligent thermal imagers are widely used all over the world. Since intelligent thermal imagers usually require real-time temperature measurement, it is important to find a method to quickly and accurately detect human faces in thermal infrared images. This paper proposes two different methods. One uses image processing methods and faces detected in visible images to determine the position of the face in the infrared image, while the other applies object detection algorithms, including YOLOv3 and Faster R-CNN, directly to the infrared images. Both methods are evaluated on a self-collected dataset containing 944 pairs of visible and infrared images, and their robustness is observed by adding random noise to the images. Experiments show that the first method has much lower latency, while the second has higher accuracy in both cases. © 2022, Springer Nature Singapore Pte Ltd.
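A brief sketch of the noise-robustness check mentioned above: additive Gaussian noise is applied to the visible or infrared images before running either detector, and accuracy is re-measured. The noise level is an illustrative parameter, not the value used in the paper.

```python
import numpy as np

def add_gaussian_noise(image, sigma=10.0, rng=None):
    """Return a copy of an 8-bit image with zero-mean Gaussian noise added."""
    rng = rng or np.random.default_rng()
    noisy = image.astype(np.float32) + rng.normal(0.0, sigma, image.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

frame = np.full((240, 320), 128, dtype=np.uint8)   # stand-in thermal frame
noisy_frame = add_gaussian_noise(frame, sigma=15.0)
```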

13.
3rd IEEE International Conference on Frontiers Technology of Information and Computer, ICFTIC 2021 ; : 773-778, 2021.
Article in English | Scopus | ID: covidwho-1707937

ABSTRACT

Wearing masks, one of the most effective ways to diminish the transmission of COVID-19, increases the demand for automatic face mask detection in all countries. Face masks belong to the small-object category in images, which introduces the challenge of training a robust face mask detector, particularly for small-object detection. Feature pyramids derived from deep convolutional neural networks are commonly used to achieve scale-invariant object detection; however, they do not reach the same level of performance in detecting face masks as in detecting larger objects. This work proposes two methods: the first fully utilizes the feature maps extracted from the neural network by adding small multiscale anchors on the last feature map, which contains the highest-resolution information; the second replaces the standard IoU calculation with a tolerant strategy for small objects. Using these two methods, we improve the accuracy of small-object detection while increasing the overall average precision. © 2021 IEEE.
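The abstract does not spell out the exact "tolerant" IoU rule, so the following is only an illustrative guess at the idea: small ground-truth boxes are matched with a lower IoU threshold than large ones, so that tiny face-mask boxes are not rejected for minor localization error. The thresholds and the area cut-off are assumptions.

```python
def iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union > 0 else 0.0

def is_positive_match(pred_box, gt_box, small_area=32 * 32,
                      thresh_small=0.3, thresh_large=0.5):
    """Use a more lenient IoU threshold when the ground-truth box is small."""
    gt_area = (gt_box[2] - gt_box[0]) * (gt_box[3] - gt_box[1])
    thresh = thresh_small if gt_area < small_area else thresh_large
    return iou(pred_box, gt_box) >= thresh

print(is_positive_match((10, 10, 30, 30), (12, 12, 34, 34)))  # True for a small mask box
```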

14.
5th Computational Methods in Systems and Software, CoMeSySo 2021 ; 231 LNNS:356-371, 2021.
Article in English | Scopus | ID: covidwho-1565292

ABSTRACT

Physical distancing is essential to help prevent the spread of Coronavirus Disease 2019 (COVID-19). However, people are often unaware of their physical distance from each other. This study developed an autonomous system that monitors and admonishes physical distancing violators by spotlighting them and playing pre-recorded audio to remind them to keep their distance. It uses a downward-looking fisheye IP camera to take a picture of the platform every 30 s and Faster R-CNN with the Detectron2 library to detect people. Violators are then identified as individuals whose computed Euclidean distance to another person is less than 1 m. The monitored environment was divided into four quadrants, each of which is lit with COB LEDs (controlled by an Arduino UNO) if it contains at least one violator; at the same time, a corresponding audio admonishment is played through a loudspeaker. A single-sample t-test supports the accuracy of the system's Euclidean distance measurement, since the probability value of 0.6969, derived from the computed t-statistic of 0.3914, is greater than the two-tailed significance level of 0.05. © 2021, The Author(s), under exclusive license to Springer Nature Switzerland AG.
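A sketch of the statistical check reported above, assuming the per-measurement distance errors are available as a list: a single-sample t-test of the errors against a population mean of zero, using scipy. The sample values are made up; the abstract reports only t = 0.3914 and p = 0.6969.

```python
from scipy import stats

# Hypothetical differences (in metres) between measured and true distances.
distance_errors_m = [0.02, -0.05, 0.01, 0.03, -0.02, 0.04, -0.01, 0.00]

t_stat, p_value = stats.ttest_1samp(distance_errors_m, popmean=0.0)
# The measurement is treated as accurate when p > 0.05, i.e. there is no
# evidence of a systematic deviation from the true distance.
print(t_stat, p_value)
```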

15.
Multimed Tools Appl ; 80(13): 19753-19768, 2021.
Article in English | MEDLINE | ID: covidwho-1120973

ABSTRACT

There are many measures to prevent the spread of the COVID-19 virus, and one of the most effective is wearing a face mask. Almost everyone wears a face mask at all times in public places during the coronavirus pandemic, which encourages us to explore face mask detection technology to monitor people wearing masks in public places. Most recent and advanced face mask detection approaches are designed using deep learning. In this article, two state-of-the-art object detection models, namely YOLOv3 and Faster R-CNN, are used to achieve this task. The authors trained both models on a dataset consisting of images of people in two categories: with and without face masks. This work proposes a technique that draws bounding boxes (red or green) around people's faces, depending on whether the person is wearing a mask, and keeps a record of the ratio of people wearing face masks on a daily basis. The authors also compare the performance of the two models, i.e., their precision and inference time.
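A hedged sketch of the reporting step described above: detections are drawn in green ("with mask") or red ("without mask") and the mask-wearing ratio is computed per frame so it can be accumulated daily. The detection tuple format is an assumption made for illustration.

```python
import cv2
import numpy as np

def draw_and_count(frame, detections):
    """detections: list of (x1, y1, x2, y2, has_mask) tuples."""
    masked = 0
    for x1, y1, x2, y2, has_mask in detections:
        color = (0, 255, 0) if has_mask else (0, 0, 255)  # BGR: green / red
        cv2.rectangle(frame, (x1, y1), (x2, y2), color, 2)
        masked += int(has_mask)
    ratio = masked / len(detections) if detections else 0.0
    return frame, ratio

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in video frame
_, ratio = draw_and_count(frame, [(50, 50, 150, 180, True), (300, 60, 400, 200, False)])
print(ratio)  # 0.5
```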

16.
Inform Med Unlocked ; 20: 100405, 2020.
Article in English | MEDLINE | ID: covidwho-714773

ABSTRACT

COVID-19, or novel coronavirus disease, which has already been declared a worldwide pandemic, first broke out in the large Chinese city of Wuhan. More than two hundred countries around the world have been affected by this severe virus as it spreads through human interaction. Moreover, the symptoms of the novel coronavirus are quite similar to those of the general seasonal flu. Screening of infected patients is considered a critical step in the fight against COVID-19. As there are no distinctive COVID-19-positive case detection tools available, the need for supporting diagnostic tools has increased; it is therefore highly relevant to recognize positive cases as early as possible to avoid further spreading of the epidemic. There are several methods to detect COVID-19-positive patients, typically based on respiratory samples, and among them a critical approach is radiologic or X-ray imaging. Recent findings from X-ray imaging techniques suggest that such images contain relevant information about the SARS-CoV-2 virus. The application of Deep Neural Network (DNN) techniques coupled with radiological imaging can be helpful in the accurate identification of this disease and can also help overcome the shortage of trained physicians in remote communities. In this article, we introduce a VGG-16 (Visual Geometry Group, also called OxfordNet) network-based Faster Region-based Convolutional Neural Network (Faster R-CNN) framework to detect COVID-19 patients from chest X-ray images using an available open-source dataset. Our proposed approach provides a classification accuracy of 97.36%, a sensitivity of 97.65%, and a precision of 99.28%. Therefore, we believe this method might assist health professionals in validating their initial assessment of suspected COVID-19 patients.
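A small sketch of the evaluation metrics quoted above (classification accuracy, sensitivity, precision) computed from binary COVID-19 / non-COVID-19 predictions. The labels below are toy values; the reported figures come from the paper's dataset.

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall), and precision for binary labels (1 = COVID-19)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    return accuracy, sensitivity, precision

print(classification_metrics([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0]))  # (0.833..., 1.0, 0.75)
```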
